
    Dynamics of fintech terms in news and blogs and specialization of companies of the fintech industry

    We perform a large-scale analysis of a list of fintech terms in (i) news and blogs in the English language and (ii) professional descriptions of companies operating in many countries. The occurrence and co-occurrence of fintech terms and locutions show a progressive evolution of the list of fintech terms into a compact and coherent set of terms used worldwide to describe fintech business activities. By using methods of complex networks that are specifically designed to deal with heterogeneous systems, our analysis of a large set of professional descriptions of companies shows that companies having fintech terms in their description present over-expressions of specific attributes of country, municipality, and economic sector. By using the approach of statistically validated networks, we detect geographical and economic over-expressions of a set of companies related to the multi-industry, geographically and economically distributed fintech movement.
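
    The measurement underlying this analysis is term occurrence and co-occurrence counting over a corpus. A minimal sketch, assuming a hypothetical term list and two toy documents (the study itself works with large news, blog, and company-description corpora):

    ```python
    from collections import Counter
    from itertools import combinations

    # Hypothetical term list and documents; the study uses large news, blog,
    # and company-description corpora.
    FINTECH_TERMS = {"blockchain", "robo-advisor", "crowdfunding", "insurtech"}

    documents = [
        "startup launches blockchain crowdfunding platform",
        "insurtech firm adds robo-advisor and blockchain services",
    ]

    occurrence = Counter()
    co_occurrence = Counter()

    for doc in documents:
        present = sorted(FINTECH_TERMS & set(doc.lower().split()))
        occurrence.update(present)
        # Count each unordered term pair once per document.
        co_occurrence.update(combinations(present, 2))

    print(occurrence.most_common())
    print(co_occurrence.most_common())
    ```

    Pair counts like these are the raw input that the statistically validated network machinery later filters against a heterogeneity-aware null model.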

    Optimal Computation of Avoided Words

    The deviation of the observed frequency of a word w from its expected frequency in a given sequence x is used to determine whether or not the word is avoided. This concept is particularly useful in DNA linguistic analysis. The value of the standard deviation of w, denoted by std(w), effectively characterises the extent of a word by its edge contrast in the context in which it occurs. A word w of length k > 2 is a ρ-avoided word in x if std(w) ≤ ρ, for a given threshold ρ < 0. Notice that such a word may be completely absent from x. Hence computing all such words naïvely can be a very time-consuming procedure, in particular for large k. In this article, we propose an O(n)-time and O(n)-space algorithm to compute all ρ-avoided words of length k in a given sequence x of length n over a fixed-sized alphabet. We also present a time-optimal O(σn)-time and O(σn)-space algorithm to compute all ρ-avoided words (of any length) in a sequence of length n over an alphabet of size σ. Furthermore, we provide a tight asymptotic upper bound for the number of ρ-avoided words and the expected length of the longest one. We make available an open-source implementation of our algorithm. Experimental results, using both real and synthetic data, show the efficiency of our implementation.
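
    The abstract does not spell out how the expected frequency is modelled. The sketch below assumes the maximal-order Markov estimator E(w) = f(w[:-1]) f(w[1:]) / f(w[1:-1]) that is standard in the avoided-words literature, and uses brute-force counting rather than the paper's O(n) suffix-tree construction:

    ```python
    import math
    from collections import Counter

    def avoided_words(x: str, k: int, rho: float) -> list[str]:
        """Brute-force sketch: find length-k words w with std(w) <= rho.

        Assumes the common maximal-order Markov estimator
        E(w) = f(w[:-1]) * f(w[1:]) / f(w[1:-1]) and the normalised
        deviation std(w) = (f(w) - E(w)) / max(sqrt(E(w)), 1).
        """
        def counts(m: int) -> Counter:
            return Counter(x[i:i + m] for i in range(len(x) - m + 1))

        f_k, f_km1, f_km2 = counts(k), counts(k - 1), counts(k - 2)
        alphabet = sorted(set(x))

        result = []
        # Candidates extend observed (k-1)-mers, so absent words whose
        # prefix occurs in x are also examined.
        for core in f_km1:
            for a in alphabet:
                w = core + a
                denom = f_km2[w[1:-1]]
                if denom == 0:
                    continue
                expected = f_km1[w[:-1]] * f_km1[w[1:]] / denom
                std_w = (f_k[w] - expected) / max(math.sqrt(expected), 1.0)
                if std_w <= rho:
                    result.append(w)
        return result

    print(avoided_words("AGCGCGACGTCTGTGT", 3, rho=-0.1))
    ```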

    Lévy flights of photons in hot atomic vapours

    Properties of random and fluctuating systems are often studied through the use of Gaussian distributions. However, in a number of situations, rare events have drastic consequences that cannot be explained by Gaussian statistics. Considerable effort has thus been devoted to the study of non-Gaussian fluctuations such as Lévy statistics, which generalize the standard description of random walks. Unfortunately, only macroscopic signatures, obtained by averaging over many random steps, are usually observed in physical systems. We present experimental results investigating the elementary process of anomalous diffusion of photons in hot atomic vapours. We measure the step-size distribution of the random walk and show that it follows a power law characteristic of Lévy flights.
    Comment: This final version is identical to the one published in Nature Physics.
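
    What distinguishes a Lévy flight from an ordinary random walk is that step sizes are drawn from a heavy-tailed power law with diverging variance, so single rare steps dominate the trajectory. A minimal sketch contrasting the two (the exponent and sample size are illustrative, not values from the experiment):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_steps = 100_000

    # Gaussian steps: all moments finite, the sample variance stabilises.
    gauss = np.abs(rng.normal(0.0, 1.0, n_steps))

    # Power-law (Pareto) steps P(x) ~ x**(-alpha) with alpha = 2.2:
    # the variance diverges, so rare huge steps dominate the walk.
    alpha = 2.2
    levy = (1.0 - rng.random(n_steps)) ** (-1.0 / (alpha - 1.0))

    print("Gaussian: max/median step =", gauss.max() / np.median(gauss))
    print("Levy:     max/median step =", levy.max() / np.median(levy))
    ```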

    Statistically validated networks in bipartite complex systems

    Many complex systems present an intrinsic bipartite nature and are often described and modeled in terms of networks [1-5]. Examples include movies and actors [1, 2, 4], authors and scientific papers [6-9], email accounts and emails [10], and plants and the animals that pollinate them [11, 12]. Bipartite networks are often very heterogeneous in the number of relationships that the elements of one set establish with the elements of the other set. When one constructs a projected network with nodes from only one set, the system heterogeneity makes it very difficult to identify preferential links between the elements. Here we introduce an unsupervised method to statistically validate each link of the projected network against a null hypothesis that takes into account the heterogeneity of the system. We apply our method to three different systems, namely the set of clusters of orthologous genes (COG) in completely sequenced genomes [13, 14], a set of daily returns of 500 US financial stocks, and the set of world movies of the IMDb database [15]. In all these systems, which differ in size and level of heterogeneity, we find that our method is able to detect network structures that are informative about the system and are not simply an expression of its heterogeneity. Specifically, our method (i) identifies the preferential relationships between the elements, (ii) naturally highlights the clustered structure of the investigated systems, and (iii) allows links to be classified according to the type of statistically validated relationship between the connected nodes.
    Comment: Main text: 13 pages, 3 figures, and 1 table. Supplementary information: 15 pages, 3 figures, and 2 tables.
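
    The abstract leaves the null model unspecified; in this line of work the standard construction tests each pair's number of common neighbours against a hypergeometric distribution fixed by the two degrees, with a multiple-comparison correction over all tested links. A toy sketch under that assumption:

    ```python
    from itertools import combinations
    from scipy.stats import hypergeom

    # Toy bipartite system: projected-set nodes mapped to their neighbours
    # in the other set (6 items labelled 0..5). Data are illustrative only.
    neighbours = {
        "A": {0, 1, 2, 3},
        "B": {0, 1, 2},
        "C": {4, 5},
    }
    n_items = 6  # size of the other set

    pairs = list(combinations(neighbours, 2))
    bonferroni = 0.05 / len(pairs)  # correct for the number of tested links

    for u, v in pairs:
        common = len(neighbours[u] & neighbours[v])
        # P(X >= common) under random co-occurrence given the two degrees.
        p = hypergeom.sf(common - 1, n_items,
                         len(neighbours[u]), len(neighbours[v]))
        verdict = "validated" if p < bonferroni else "not validated"
        print(f"{u}-{v}: common={common}, p={p:.3g} -> {verdict}")
    ```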

    Fractal Profit Landscape of the Stock Market

    We investigate the structure of the profit landscape obtained from the most basic, fluctuation-based trading strategy applied to daily stock price data. The strategy is parameterized by only two variables, p and q: stocks are sold if the log return is bigger than p and bought if it is less than -q. Repeating this simple strategy for a long time gives the profit defined on the underlying two-dimensional parameter space of p and q. We reveal that the local maxima in the profit landscape are spread in the form of a fractal structure. The fractal structure implies that successful strategies are neither localized to any region of the profit landscape nor spaced evenly throughout it, which makes the optimization notoriously hard and hypersensitive to partial or limited information. The concrete implication of this property is demonstrated by showing that a strategy optimized on one stock and applied to its future values or to other stocks yields a worse profit than a strategy that ignores fluctuations, i.e., a long-term buy-and-hold strategy.
    Comment: 12 pages, 4 figures.
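
    A minimal backtest sketch of the two-parameter rule as described; position sizing, transaction costs, and the synthetic price path are simplifying assumptions of this illustration:

    ```python
    import numpy as np

    def pq_profit(prices: np.ndarray, p: float, q: float) -> float:
        """Terminal wealth factor of the threshold strategy on one series.

        Sketch of the rule in the abstract: after a daily log return above
        p, sell (go to cash); after one below -q, buy back in.
        """
        log_returns = np.diff(np.log(prices))
        holding = True  # assume we start invested
        wealth = 1.0
        for r in log_returns:
            if holding:
                wealth *= np.exp(r)
            if holding and r > p:
                holding = False   # sell signal
            elif not holding and r < -q:
                holding = True    # buy signal
        return wealth

    # Hypothetical price path; the paper uses real daily stock prices.
    rng = np.random.default_rng(1)
    prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.02, 1000)))

    # Scan the (p, q) plane; the local maxima of this surface are the
    # landscape whose fractal geometry the paper studies.
    grid = np.linspace(0.005, 0.05, 10)
    landscape = np.array([[pq_profit(prices, p, q) for q in grid]
                          for p in grid])
    print("best wealth factor on the grid:", landscape.max())
    ```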

    Dominating Clasp of the Financial Sector Revealed by Partial Correlation Analysis of the Stock Market

    What are the dominant stocks that drive the correlations present among stocks traded in a stock market? Can a correlation analysis provide an answer to this question? In the past, correlation-based networks have been proposed as a tool to uncover the underlying backbone of the market. Such networks represent the stocks and their relationships, which are then investigated using different network-theory methodologies. Here we introduce a new concept to tackle the above question: the partial correlation network. Partial correlation is a measure of how the correlation between two variables, e.g., stock returns, is affected by a third variable. Using it, we define a proxy of stock influence, which is then used to construct partial correlation networks. The empirical part of this study is performed on a specific financial system, namely the set of 300 highly capitalized stocks traded at the New York Stock Exchange in the time period 2001–2003. By constructing the partial correlation network, unlike the case of standard correlation-based networks, we find that stocks belonging to the financial sector and, in particular, to the investment services sub-sector, are the most influential stocks affecting the correlation profile of the system. Using a moving-window analysis, we find that the strong influence of the financial stocks is conserved across time for the investigated trading period. Our findings shed new light on the underlying mechanisms and driving forces controlling the correlation profile observed in a financial market.
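
    Partial correlation has a closed form in terms of pairwise correlations: ρ(i,j|k) = (ρ_ij − ρ_ik ρ_jk) / √((1 − ρ_ik²)(1 − ρ_jk²)). The abstract does not define the influence proxy explicitly; the sketch below assumes the commonly used choice, the average drop d(i,j:k) = ρ_ij − ρ(i,j|k) over all pairs:

    ```python
    import numpy as np

    def influence_of(k: int, corr: np.ndarray) -> float:
        """Average influence of stock k, assumed here to be the mean of
        d(i, j : k) = rho_ij - rho_ij|k over all pairs i, j != k,
        where rho_ij|k is the partial correlation controlling for k."""
        n = corr.shape[0]
        total, count = 0.0, 0
        for i in range(n):
            for j in range(i + 1, n):
                if k in (i, j):
                    continue
                partial = (corr[i, j] - corr[i, k] * corr[j, k]) / np.sqrt(
                    (1 - corr[i, k] ** 2) * (1 - corr[j, k] ** 2)
                )
                total += corr[i, j] - partial
                count += 1
        return total / count

    # Hypothetical returns for 5 "stocks"; the study uses 300 NYSE stocks.
    rng = np.random.default_rng(2)
    common = rng.normal(size=500)  # shared market mode
    returns = 0.6 * common[:, None] + rng.normal(size=(500, 5))
    corr = np.corrcoef(returns, rowvar=False)

    for k in range(5):
        print(f"stock {k}: influence = {influence_of(k, corr):.4f}")
    ```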

    Optimal leverage from non-ergodicity

    In modern portfolio theory, the balancing of expected returns on investments against uncertainties in those returns is aided by the use of utility functions. The Kelly criterion offers another approach, rooted in information theory, that always implies logarithmic utility. The two approaches seem incompatible, constraining investors' risk preferences too loosely or too tightly from their respective perspectives. The conflict can be understood on the basis that the multiplicative models used in both approaches are non-ergodic, which leads to ensemble-average returns differing from time-average returns in single realizations. The classic treatments, from the very beginning of probability theory, use ensemble averages, whereas the Kelly result is obtained by considering time averages. Maximizing the time-average growth rate of an investment defines an optimal leverage, whereas growth rates derived from ensemble-average returns depend linearly on leverage. The latter measure can thus incentivize investors to maximize leverage, which is detrimental to time-average growth and overall market stability. The Sharpe ratio is insensitive to leverage; its relation to optimal leverage is discussed. A better understanding of the significance of time irreversibility and non-ergodicity, and of the resulting bounds on leverage, may help policy makers in reshaping financial risk controls.
    Comment: 17 pages, 3 figures. Updated figures and extended discussion of ergodicity.
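
    For leveraged geometric Brownian motion the two growth rates can be written explicitly: the ensemble-average growth rate is linear in leverage l, while the time-average growth rate carries a −l²σ²/2 volatility drag and is maximized at l* = (μ − r)/σ². A short numerical sketch (parameter values are illustrative):

    ```python
    mu, r, sigma = 0.08, 0.02, 0.20  # illustrative drift, riskless rate, volatility

    def ensemble_growth(l: float) -> float:
        # Growth rate of the ensemble-average return: linear in leverage l.
        return r + l * (mu - r)

    def time_growth(l: float) -> float:
        # Time-average growth rate of leveraged GBM: the -l^2 sigma^2 / 2
        # volatility drag creates an interior optimum.
        return r + l * (mu - r) - 0.5 * (l * sigma) ** 2

    l_opt = (mu - r) / sigma ** 2  # leverage maximising time_growth
    print(f"optimal leverage l* = {l_opt:.2f}")
    for l in (0.5, 1.0, l_opt, 3.0):
        print(f"l={l:4.2f}: ensemble={ensemble_growth(l):+.4f}, "
              f"time={time_growth(l):+.4f}")
    ```

    At l = 3.0 the ensemble-average rate keeps rising while the time-average rate turns negative, which is the incentive mismatch the abstract describes.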

    Study of statistical correlations in intraday and daily financial return time series

    The aim of this article is to briefly review and present new studies of correlations and co-movements of stocks, so as to understand the "seasonalities" and market evolution. Using intraday data for the CAC40, we begin by reasserting the findings of Allez and Bouchaud [New J. Phys. 13, 025010 (2011)]: the average correlation between stocks increases throughout the day. We then use multidimensional scaling (MDS) to generate maps and visualize the dynamic evolution of the stock market during the day. We do not find any marked difference in the structure of the market during a day. A further aim is to use daily data for MDS studies, and to visualize or detect specific sectors in a market and periods of crisis. We suggest that this type of visualization may be used to identify potential pairs of stocks for a "pairs trade".
    Comment: 22 pages, 11 figures, Springer-Verlag format. To appear in the conference proceedings of Econophys-Kolkata VI: "Econophysics of systemic risk and network dynamics", Eds. F. Abergel, B.K. Chakrabarti, A. Chakraborti and A. Ghosh, to be published by Springer-Verlag (Italia), Milan (2012).
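
    MDS needs a dissimilarity matrix; for stocks a common choice, assumed here since the abstract does not state one, is the correlation-based distance d_ij = √(2(1 − ρ_ij)). A toy sketch using scikit-learn's MDS on synthetic two-sector returns:

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical daily returns for 6 "stocks"; the study uses CAC40 data.
    rng = np.random.default_rng(3)
    sector = np.repeat([0, 1], 3)              # two toy sectors
    base = rng.normal(size=(250, 2))           # one latent factor per sector
    returns = base[:, sector] + 0.8 * rng.normal(size=(250, 6))

    corr = np.corrcoef(returns, rowvar=False)
    # Correlation-to-distance map d_ij = sqrt(2 (1 - rho_ij)), assumed here;
    # stocks that co-move end up close together on the map.
    dist = np.sqrt(2.0 * (1.0 - corr))

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)
    for i, (x, y) in enumerate(coords):
        print(f"stock {i} (sector {sector[i]}): ({x:+.2f}, {y:+.2f})")
    ```

    On such a map, same-sector stocks cluster, and tightly coupled pairs (candidates for a pairs trade) sit nearly on top of each other.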

    Comprehensive Analysis of Market Conditions in the Foreign Exchange Market: Fluctuation Scaling and Variance-Covariance Matrix

    We investigate quotation and transaction activities in the foreign exchange market for every week during the period June 2007 to December 2010. A scaling relationship between the mean values of the number of quotations (or number of transactions) for various currency pairs and the corresponding standard deviations holds for a majority of the weeks. However, the scaling breaks down in some time intervals, which is related to the emergence of market shocks. There is a monotonic relationship between the values of the scaling indices and the global averages of the currency-pair cross-correlations when both quantities are observed for various window lengths Δt.
    Comment: 13 pages, 10 figures.
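
    Fluctuation scaling (Taylor's law) states that the standard deviation grows with the mean as σ ∝ μ^α; the scaling index α is estimated by a linear fit in log-log coordinates across currency pairs. A sketch on synthetic counts (illustrative only; the study uses FX quotation and transaction records):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic weekly activity counts for 20 "currency pairs" with
    # different mean levels; real data come from FX quotation records.
    true_alpha = 0.7
    means = np.logspace(1, 4, 20)
    counts = np.array([
        rng.normal(m, m ** true_alpha, size=52)  # 52 weekly observations
        for m in means
    ])

    mu = counts.mean(axis=1)
    sigma = counts.std(axis=1)

    # Fluctuation scaling sigma ~ mu**alpha: fit alpha in log-log space.
    alpha, intercept = np.polyfit(np.log(mu), np.log(sigma), 1)
    print(f"estimated scaling index alpha = {alpha:.2f}")
    ```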